
    A Bernstein-Von Mises Theorem for discrete probability distributions

    We investigate the asymptotic normality of the posterior distribution in the discrete setting, when the model dimension increases with the sample size. We consider a probability mass function $\theta_0$ on $\mathbb{N}\setminus\{0\}$ and a sequence of truncation levels $(k_n)_n$ satisfying $k_n^3 \leq n \inf_{i\leq k_n}\theta_0(i)$. Let $\hat{\theta}_n$ denote the maximum likelihood estimate of $(\theta_0(i))_{i\leq k_n}$ and let $\Delta_n(\theta_0)$ denote the $k_n$-dimensional vector whose $i$-th coordinate is $\sqrt{n}\,(\hat{\theta}_n(i)-\theta_0(i))$ for $1\leq i\leq k_n$. We check that, under mild conditions on $\theta_0$ and on the sequence of prior probabilities on the $k_n$-dimensional simplices, the variation distance between the posterior distribution, recentered around $\hat{\theta}_n$ and rescaled by $\sqrt{n}$, and the $k_n$-dimensional Gaussian distribution $\mathcal{N}(\Delta_n(\theta_0), I^{-1}(\theta_0))$ converges in probability to $0$. This theorem can be used to prove the asymptotic normality of Bayesian estimators of the Shannon and Rényi entropies. The proofs are based on concentration inequalities for centered and non-centered chi-square (Pearson) statistics. The latter allow us to establish posterior concentration rates with respect to the Fisher distance rather than the Hellinger distance, as is commonplace in nonparametric Bayesian statistics. Comment: Published in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/08-EJS262
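    The Bernstein-von Mises statement above can be illustrated numerically (this is only a sketch, not the authors' proof technique): for a multinomial model with a Dirichlet prior, the exact posterior is again Dirichlet, so one can compare posterior draws of $\sqrt{n}(\theta-\hat{\theta}_n)$ with the Gaussian limit. The pmf `theta0`, the sample size, and the uniform Dirichlet prior are hypothetical choices made for the illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # hypothetical truncated pmf theta_0 on {1,...,k}
    k, n = 5, 20000
    theta0 = np.array([0.40, 0.25, 0.15, 0.12, 0.08])

    counts = rng.multinomial(n, theta0)
    theta_hat = counts / n                    # maximum likelihood estimate

    # under a uniform Dirichlet(1,...,1) prior, the posterior is Dirichlet(1 + counts)
    post = rng.dirichlet(1.0 + counts, size=50000)

    # BvM: sqrt(n)(theta - theta_hat) | data is approximately N(0, Sigma) with
    # Sigma = diag(theta0) - theta0 theta0^T (inverse Fisher information on the simplex)
    z = np.sqrt(n) * (post - theta_hat)
    emp_cov = np.cov(z, rowvar=False)
    asy_cov = np.diag(theta0) - np.outer(theta0, theta0)

    print(np.max(np.abs(emp_cov - asy_cov)))  # small when n is large
    ```

    The agreement improves as $n$ grows, consistent with convergence of the total variation distance to $0$.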

    Speed of propagation for Hamilton-Jacobi equations with multiplicative rough time dependence and convex Hamiltonians

    We show that the initial value problem for Hamilton-Jacobi equations with multiplicative rough time dependence, typically stochastic, and convex Hamiltonians satisfies finite speed of propagation. We prove that, in general, the range of dependence is bounded by a multiple of the length of the "skeleton" of the path, that is, the piecewise linear path obtained by connecting the successive extrema of the original one. When the driving path is a Brownian motion, we prove that its skeleton has almost surely finite length. We also discuss the optimality of the estimate.
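    The "skeleton" notion used above is concrete enough to compute for a sampled path: keep the endpoints and every interior point where the increment changes sign. A minimal sketch, assuming the skeleton's length is measured as its total variation (the sum of absolute increments between successive extrema):

    ```python
    import numpy as np

    def skeleton(path):
        """Successive extrema of a sampled path: keep the endpoints and every
        interior point where the increment changes sign (local max or min)."""
        x = np.asarray(path, dtype=float)
        keep = [0]
        for i in range(1, len(x) - 1):
            if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0:  # sign change => extremum
                keep.append(i)
        keep.append(len(x) - 1)
        return x[keep]

    def skeleton_length(path):
        """Total variation of the skeleton (an assumed notion of 'length')."""
        return np.abs(np.diff(skeleton(path))).sum()

    # toy zig-zag path: extrema at values 0, 3, 1, 4 -> length 3 + 2 + 3 = 8
    print(skeleton_length([0, 1, 2, 3, 2, 1, 2, 3, 4]))  # -> 8.0
    ```

    For a Brownian path sampled on a grid this count of extrema grows with the resolution, so the almost-sure finiteness of the skeleton length proved in the paper is a statement about the continuous-time path, not its discretisations.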

    Eikonal equations and pathwise solutions to fully non-linear SPDEs

    We study the existence and uniqueness of stochastic viscosity solutions of fully nonlinear, possibly degenerate, second-order stochastic PDEs with quadratic Hamiltonians associated to a Riemannian geometry. The results are new and extend the class of equations studied so far by the last two authors.

    Long-time behaviour of stochastic Hamilton-Jacobi equations

    The long-time behavior of stochastic Hamilton-Jacobi equations is analyzed, including the stochastic mean curvature flow as a special case. In a variety of settings, new and sharpened results are obtained. Among them are (i) a regularization-by-noise phenomenon for the mean curvature flow with homogeneous noise, which establishes that the inclusion of noise speeds up the decay of solutions, and (ii) the long-time convergence of solutions to spatially inhomogeneous stochastic Hamilton-Jacobi equations. A number of motivating examples of nonlinear stochastic partial differential equations are presented in the appendix.

    Error exponents for AR order testing


    Order Estimation and Model Selection

    No full text
    reason why source coding concepts and techniques have become a standard tool in the area. This chapter presents four kinds of results: a first, very general consistency result in a Bayesian setting provides hints about the ideal penalties that could be used in penalized maximum likelihood order estimation. Then we provide a general construction for strongly consistent order estimators based on universal coding arguments. The third main result reports a recent tour de force by Csiszár and Shields (2000), who show that the Bayesian Information Criterion provides a strongly consistent Markov order estimator. We conclude by presenting a general framework for analyzing the Bahadur efficiency of order estimation procedures, following the line of Gassiat and Boucheron (to appear). LRI UMR 8623 CNRS, Université Paris-Sud; Mathématiques, Université Paris-Sud. 2.1 Model Order Identification: what is it about? In the preceding chapters, we have been concerned with inference problems in HMMs where t
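    The BIC Markov order estimator mentioned above (the criterion Csiszár and Shields proved strongly consistent) is easy to sketch: for each candidate order $k$, maximize the log-likelihood over transition probabilities and penalize by half the number of free parameters times $\log n$. The helper below is an illustrative implementation, not the authors' code; the toy data and the cap `max_order` are arbitrary choices.

    ```python
    import numpy as np
    from collections import Counter

    def bic_markov_order(seq, alphabet_size, max_order):
        """BIC order estimator for a Markov chain: pick the order k minimising
        -loglik + (|A|^k (|A|-1)/2) log n, a sketch of the criterion shown
        strongly consistent by Csiszar and Shields (2000)."""
        n = len(seq)
        scores = []
        for k in range(max_order + 1):
            ctx = Counter()                   # counts of length-k contexts
            trans = Counter()                 # counts of (context, next symbol)
            for i in range(k, n):
                c = tuple(seq[i - k:i])
                ctx[c] += 1
                trans[(c, seq[i])] += 1
            # maximized log-likelihood: sum m log(m / context count)
            loglik = sum(m * np.log(m / ctx[c]) for (c, _), m in trans.items())
            penalty = 0.5 * alphabet_size**k * (alphabet_size - 1) * np.log(n)
            scores.append(-loglik + penalty)
        return int(np.argmin(scores))

    # toy check on an i.i.d. (order-0) binary sequence
    rng = np.random.default_rng(1)
    iid = rng.integers(0, 2, size=5000).tolist()
    print(bic_markov_order(iid, alphabet_size=2, max_order=3))  # typically 0
    ```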

    On simultaneous signal estimation and parameter identification using a generalized likelihood approach


    Inference in finite state space nonparametric hidden Markov models and applications

    No full text
    Hidden Markov models (HMMs) are intensively used in various fields to model and classify data observed along a line (e.g. time). The fit of such models strongly relies on the choice of emission distributions, which are most often chosen from some parametric family. In this paper, we prove that finite state space nonparametric HMMs are identifiable as soon as the transition matrix of the latent Markov chain has full rank and the emission probability distributions are linearly independent. This general result allows the use of semi- or nonparametric emission distributions. Based on this result, we present a series of classification problems that can be tackled outside the strict parametric framework. We derive the corresponding inference algorithms and illustrate their use on a few biological examples, showing that they may improve classification performance.
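    The two identifiability conditions stated above are checkable in practice. A minimal sketch, assuming a hypothetical 2-state HMM with Gaussian emissions discretized on a grid: verify that the transition matrix has full rank and that the emission densities, viewed as vectors, are linearly independent.

    ```python
    import numpy as np

    # hypothetical 2-state example of the identifiability conditions:
    # full-rank transition matrix + linearly independent emission densities
    Q = np.array([[0.7, 0.3],
                  [0.2, 0.8]])               # latent transition matrix

    x = np.linspace(-5, 5, 200)              # grid for discretizing the densities

    def gauss(mu, s):
        return np.exp(-(x - mu)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

    emissions = np.vstack([gauss(-1.0, 1.0),  # state-1 emission density on the grid
                           gauss(+1.0, 1.0)]) # state-2 emission density on the grid

    full_rank_Q = np.linalg.matrix_rank(Q) == Q.shape[0]
    indep_emissions = np.linalg.matrix_rank(emissions) == emissions.shape[0]
    print(full_rank_Q and indep_emissions)    # True: both conditions hold
    ```

    Note that no parametric assumption on the emissions is needed: any pair of linearly independent densities, e.g. a Gaussian and an arbitrary histogram, satisfies the condition equally well.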